Due to their ability to offer more comprehensive information than data from a single view, multi-view (multi-source, multi-modal, multi-perspective, etc.) data are being used more frequently in remote sensing tasks. However, as the number of views grows, the issue of data quality becomes more apparent, limiting the potential benefits of multi-view data. Although recent deep neural network (DNN) based models can learn view weights adaptively, the lack of research on explicitly quantifying the data quality of each view during fusion leaves these models unexplainable and makes them perform unsatisfactorily and inflexibly in downstream remote sensing tasks. To fill this gap, this paper introduces evidential deep learning into the task of aerial-ground dual-view remote sensing scene classification to model the credibility of each view. Specifically, the theory of evidence is used to calculate an uncertainty value that describes the decision-making risk of each view. Based on this uncertainty, a novel decision-level fusion strategy is proposed to ensure that the view with lower risk obtains more weight, making the classification more credible. The proposed approach achieves state-of-the-art results on two well-known, publicly available datasets of aerial-ground dual-view remote sensing images, demonstrating its effectiveness. The code and datasets of this article are available at https://github.com/gaopiaoliang/Evidential.
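As a rough illustration of the uncertainty-driven fusion described above, the sketch below derives a Dirichlet-based uncertainty per view (in the usual evidential-deep-learning style) and weights the two views accordingly; the paper's exact evidence head and fusion rule may differ.

```python
import torch
import torch.nn.functional as F

def dirichlet_uncertainty(logits):
    """Evidential head: non-negative evidence -> Dirichlet parameters.
    Returns expected class probabilities and a per-sample uncertainty u = K / S."""
    evidence = F.softplus(logits)               # e_k >= 0
    alpha = evidence + 1.0                      # Dirichlet concentration parameters
    strength = alpha.sum(dim=-1, keepdim=True)  # S = sum_k alpha_k
    prob = alpha / strength                     # expected class probability
    uncertainty = logits.shape[-1] / strength   # u = K / S, in (0, 1]
    return prob, uncertainty

def fuse_views(logits_aerial, logits_ground):
    """Decision-level fusion: the view with lower uncertainty (risk) gets more weight."""
    p_a, u_a = dirichlet_uncertainty(logits_aerial)
    p_g, u_g = dirichlet_uncertainty(logits_ground)
    w_a, w_g = 1.0 - u_a, 1.0 - u_g
    return (w_a * p_a + w_g * p_g) / (w_a + w_g)

# toy usage: 4 samples, 10 scene classes from two view-specific backbones
fused = fuse_views(torch.randn(4, 10), torch.randn(4, 10))
print(fused.argmax(dim=-1))
```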
Federated learning (FL) is a method for training a model with distributed data from numerous participants such as IoT devices. It inherently assumes uniform capacity among participants. In practice, however, participants have diverse computational resources due to different conditions such as different energy budgets or concurrent unrelated tasks. It is necessary to reduce the computation overhead for participants with limited computational resources; otherwise they would be unable to finish the full training process. To address this computation heterogeneity, in this paper we propose a strategy for estimating local models without computationally intensive iterations. Based on it, we propose Computationally Customized Federated Learning (CCFL), which allows each participant to decide, in each round, whether to perform conventional local training or model estimation according to its current computational resources. Both theoretical analysis and exhaustive experiments indicate that CCFL has the same convergence rate as FedAvg without resource constraints. Furthermore, CCFL can be viewed as a computation-efficient extension of FedAvg that retains model performance while considerably reducing computation overhead.
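A minimal sketch of one such communication round is given below, assuming a FedAvg-style server loop; the estimation rule shown (reusing a client's last cached update when it has no budget) is only a hypothetical stand-in for the paper's estimator.

```python
import copy
import torch
import torch.nn.functional as F

def ccfl_round(global_model, clients, lr=0.01):
    """One round in the spirit of CCFL: each client either runs conventional
    local training or returns a cheap estimate of its update, depending on its
    current budget. `clients` is assumed to be a list of dicts with keys
    "has_budget", "data" (iterable of (x, y) batches), and optionally "last_delta"."""
    updates = []
    for client in clients:
        if client["has_budget"]:
            local = copy.deepcopy(global_model)
            opt = torch.optim.SGD(local.parameters(), lr=lr)
            for x, y in client["data"]:            # conventional local training
                opt.zero_grad()
                F.cross_entropy(local(x), y).backward()
                opt.step()
            delta = [p_l.data - p_g.data for p_l, p_g
                     in zip(local.parameters(), global_model.parameters())]
            client["last_delta"] = delta           # cache for resource-poor rounds
        else:
            delta = client.get("last_delta") or \
                    [torch.zeros_like(p) for p in global_model.parameters()]
        updates.append(delta)
    with torch.no_grad():                          # FedAvg-style aggregation
        for i, p in enumerate(global_model.parameters()):
            p.add_(torch.stack([u[i] for u in updates]).mean(dim=0))
    return global_model
```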
Explainability of Graph Neural Networks (GNNs) is critical to various GNN applications but remains an open challenge. A convincing explanation should be both necessary and sufficient at the same time. However, existing GNN explanation approaches focus on only one of the two aspects, necessity or sufficiency, or on a trade-off between them. To search for the most necessary and sufficient explanation, the Probability of Necessity and Sufficiency (PNS) can be applied, since it mathematically quantifies the necessity and sufficiency of an explanation. Nevertheless, the difficulty of obtaining PNS due to non-monotonicity and the challenge of counterfactual estimation limit its wide use. To address the non-identifiability of PNS, we resort to a lower bound of PNS that can be optimized via counterfactual estimation, and propose Necessary and Sufficient Explanation for GNN (NSEG) by optimizing that lower bound. Specifically, we employ nearest-neighbor matching, rather than random perturbation, to generate counterfactual samples for node features. In particular, NSEG combines edges and node features to generate an explanation, where the common edge-only explanation is a special case of the combined explanation. Empirical studies show that NSEG achieves excellent performance in generating the most necessary and sufficient explanations among a series of state-of-the-art methods.
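The nearest-neighbor counterfactual generation for node features can be illustrated with the following short sketch; it shows only the feature-replacement step, not NSEG's full objective over edges and features.

```python
import torch

def nn_counterfactual_features(x, mask):
    """x: [N, D] node features; mask: [N, D] binary explanation mask.
    Features excluded from the explanation are replaced by those of the node's
    nearest neighbor, instead of being randomly perturbed. A simplified sketch."""
    dist = torch.cdist(x, x)                       # pairwise feature distances
    dist.fill_diagonal_(float("inf"))              # a node cannot match itself
    nn_idx = dist.argmin(dim=1)                    # nearest neighbor per node
    x_cf = mask * x + (1 - mask) * x[nn_idx]       # keep explained entries, swap the rest
    return x_cf
```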
Purpose: Vision-based robot tool segmentation plays a fundamental role in surgical robots and downstream tasks. CaRTS, based on a complementary causal model, has shown promising performance in unseen counterfactual surgical environments in the presence of smoke, blood, etc. However, CaRTS requires over 30 iterations of optimization to converge for a single image due to limited observability. Method: To address the above limitations, we take temporal relations into consideration and propose a temporal causal model for robot tool segmentation on video sequences. We design an architecture named Temporally Constrained CaRTS (TC-CaRTS). TC-CaRTS has three novel modules that complement CaRTS: a temporal optimization pipeline, a kinematics correction network, and spatial-temporal regularization. Results: Experimental results show that TC-CaRTS requires far fewer iterations to achieve the same or better performance as CaRTS. TC-CaRTS also achieves the same or better performance in different domains compared to CaRTS. All three modules are shown to be effective. Conclusion: We propose TC-CaRTS, which leverages temporal constraints as additional observability. We show that TC-CaRTS outperforms prior work on the robot tool segmentation task with improved convergence speed on test datasets from different domains.
Masked image modeling (MIM) has shown huge potential for self-supervised learning over the past year. Building on the universal vision transformer backbone, MIM learns self-supervised visual representations by masking a portion of image patches while attempting to recover the missing pixels. Most previous works mask image patches randomly, which underutilizes semantic information that is beneficial to visual representation learning. On the other hand, due to the large size of the backbone, most previous works have to spend much time on pre-training. In this paper, we propose the Attention-driven Masking and Throwing Strategy (AMT), which addresses both problems. We first leverage the self-attention mechanism to obtain the semantic information of the image automatically during training, without any supervision. The masking strategy is then guided by this information to mask areas selectively, which is helpful for representation learning. Moreover, a redundant patch throwing strategy is proposed, which makes learning more efficient. As a plug-and-play module for masked image modeling, AMT improves the linear probing accuracy of MAE by $2.9\% \sim 5.9\%$ on CIFAR-10/100, STL-10, Tiny ImageNet, and ImageNet-1K, and also improves the fine-tuning accuracy of MAE and SimMIM. Moreover, this design achieves superior performance on downstream detection and segmentation tasks.
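A possible realization of attention-driven masking and throwing is sketched below, assuming per-patch attention scores taken from the [CLS] token; the concrete selection rule used by AMT may differ from this illustration.

```python
import torch

def attention_guided_mask(attn_cls, mask_ratio=0.6, throw_ratio=0.2):
    """attn_cls: [B, N] attention from the [CLS] token to the N patches.
    Throw the least-attended (redundant) patches entirely, then sample patches to
    mask among the rest with probability proportional to their attention."""
    B, N = attn_cls.shape
    n_throw = int(N * throw_ratio)
    n_keep_pool = N - n_throw
    n_mask = int(n_keep_pool * mask_ratio)

    order = attn_cls.argsort(dim=1, descending=True)      # most attended first
    kept = order[:, :n_keep_pool]                          # thrown patches are dropped
    kept_attn = torch.gather(attn_cls, 1, kept)
    probs = kept_attn / kept_attn.sum(dim=1, keepdim=True)
    masked = torch.multinomial(probs, n_mask)              # indices into `kept`
    mask = torch.zeros(B, n_keep_pool)
    mask.scatter_(1, masked, 1.0)                          # 1 = reconstruct this patch
    return kept, mask.bool()

# toy usage: attention over 196 patches for a batch of 2 images
kept, mask = attention_guided_mask(torch.rand(2, 196).softmax(dim=-1))
```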
Multi-view graph clustering (MGC) methods are increasingly being studied due to the explosion of multi-view data with graph structural information. The critical point of MGC is to better utilize the view-specific and view-common information in the features and graphs of multiple views. However, existing works have an inherent limitation: they are unable to concurrently utilize the consensus graph information across multiple graphs and the view-specific feature information. To address this issue, we propose the Variational Graph Generator for Multi-View Graph Clustering (VGMGC). Specifically, a novel variational graph generator is proposed to extract common information among multiple graphs. This generator infers a reliable variational consensus graph based on a prior assumption over the multiple graphs. Then a simple yet effective graph encoder, in conjunction with the multi-view clustering objective, is presented to learn the desired graph embeddings for clustering; it embeds the inferred view-common graph and the view-specific graphs together with the features. Finally, theoretical results illustrate the rationality of VGMGC by analyzing the uncertainty of the inferred consensus graph with the information bottleneck principle. Extensive experiments demonstrate the superior performance of VGMGC over state-of-the-art methods.
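For intuition, a generic sketch of inferring a relaxed consensus graph from multiple view graphs is given below; it uses the averaged adjacency as the prior assumption and a Gumbel-sigmoid relaxation for differentiable edge sampling. VGMGC's actual generator and objective are more elaborate than this illustration.

```python
import torch
import torch.nn as nn

class VariationalConsensusGraph(nn.Module):
    """Predict edge logits from node features, bias them by the averaged view
    adjacency (prior), and sample relaxed consensus edges."""
    def __init__(self, d_in, d_hid=64):
        super().__init__()
        self.proj = nn.Linear(d_in, d_hid)

    def forward(self, feats, adj_views, tau=0.5):
        # feats: [N, d_in]; adj_views: list of [N, N] view adjacency matrices
        prior = torch.stack(adj_views).mean(dim=0)              # prior belief over edges
        h = self.proj(feats)
        logits = h @ h.t() + torch.logit(prior.clamp(1e-4, 1 - 1e-4))
        u = torch.rand_like(logits)
        logistic = torch.log(u) - torch.log1p(-u)               # logistic noise
        return torch.sigmoid((logits + logistic) / tau)          # relaxed consensus graph
```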
Text-based visual question answering (TextVQA) aims to provide correct answers to questions about images that contain multiple scene texts. In most cases, text is naturally attached to object surfaces, so spatial reasoning between text and objects is crucial in TextVQA. However, existing methods are limited to the 2D spatial information learned from the input image and rely on transformer-based architectures to reason implicitly during fusion. Under this setting, such 2D spatial reasoning methods cannot distinguish the fine-grained spatial relations between visual objects and scene text on the same image plane, which harms the interpretability and performance of TextVQA models. In this paper, we introduce 3D geometric information into a human-like spatial reasoning process to capture the contextual knowledge of key objects step by step. To enhance the model's understanding of 3D spatial relations, (i) we propose a relation prediction module to accurately locate the attention regions of key objects, and (ii) we design a depth-aware attention calibration module to calibrate the attention of OCR tokens according to the key objects. Extensive experiments show that our method achieves state-of-the-art performance on the TextVQA and ST-VQA datasets. More encouragingly, our model surpasses others by clear margins of 5.7% and 12.1% on questions involving spatial reasoning in the TextVQA and ST-VQA valid splits. In addition, we verify the generality of our model on the text-based image captioning task.
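The following hypothetical sketch conveys the idea of depth-aware attention calibration: OCR tokens whose estimated depth is far from the depth of the attended key object are down-weighted. The names and the simple additive penalty are assumptions; the paper's calibration module is more involved.

```python
import torch

def depth_aware_calibration(attn_logits, ocr_depth, key_obj_depth, gamma=1.0):
    """attn_logits: [B, Q, T] raw attention from question queries to OCR tokens;
    ocr_depth: [B, T] estimated depth of each OCR token;
    key_obj_depth: [B, Q] depth of the key object attended by each query."""
    depth_gap = (ocr_depth.unsqueeze(1) - key_obj_depth.unsqueeze(-1)).abs()  # [B, Q, T]
    calibrated = attn_logits - gamma * depth_gap   # penalize depth-inconsistent tokens
    return calibrated.softmax(dim=-1)
```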
Although Transformers and their Conformer variants have shown promising performance in speech recognition, their heavy parameterization leads to large memory costs during training and inference. Some works use cross-layer weight sharing to reduce the number of model parameters, but the inevitable loss of capacity harms model performance. To address this issue, this paper proposes a parameter-efficient Conformer via sharing sparsely-gated experts. Specifically, we use a sparsely-gated mixture of experts (MoE) to extend the capacity of a Conformer block without increasing computation. The parameters of grouped Conformer blocks are then shared to reduce the number of parameters. Next, to ensure that the shared blocks retain the flexibility of adapting representations at different levels, the MoE routers and normalization are designed individually. In addition, knowledge distillation is used to further improve performance. Experimental results show that, compared with the full-parameter model, the proposed model achieves competitive performance with about one third of the encoder parameters.
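A minimal sketch of a sparsely-gated feed-forward expert layer with top-1 routing is shown below, together with a toy grouped-sharing arrangement; the per-group routers/normalization and the Conformer-specific sublayers of the paper are omitted.

```python
import torch
import torch.nn as nn

class SparseMoEFFN(nn.Module):
    """Sparsely-gated FFN experts with top-1 routing: capacity grows with the
    number of experts while each frame activates only one expert."""
    def __init__(self, d_model=256, d_ff=1024, num_experts=4):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)])

    def forward(self, x):                        # x: [B, T, d_model]
        gate = self.router(x).softmax(dim=-1)    # routing probabilities
        top_p, top_i = gate.max(dim=-1)          # top-1 expert per frame
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            sel = top_i == e                     # frames routed to expert e
            if sel.any():
                out[sel] = top_p[sel].unsqueeze(-1) * expert(x[sel])
        return out

# grouped sharing sketch: several layers in a group reuse one block's weights
shared = SparseMoEFFN()
encoder_group = [shared for _ in range(3)]
```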
Some lightweight convolutional neural network (CNN) models have recently been designed for remote sensing object detection (RSOD). However, most of them simply replace vanilla convolutions with separable convolutions, which may be inefficient due to a large accuracy loss and may be unable to detect oriented bounding boxes (OBBs). In addition, existing OBB detection methods struggle to accurately constrain the shapes of the objects predicted by CNNs. In this paper, we propose an efficient lightweight oriented object detector (LO-Det). Specifically, a channel separation-aggregation (CSA) structure is designed to simplify the complexity of separable convolutions, and a dynamic receptive field (DRF) mechanism is developed to maintain high accuracy by customizing the convolution kernels and their perception ranges at low network complexity. The CSA-DRF component optimizes efficiency while maintaining high accuracy. Then, a diagonal support constraint head (DSC-Head) component is designed to detect OBBs and constrain their shapes more accurately and stably. Extensive experiments on public datasets demonstrate that the proposed LO-Det can run very fast, even on embedded devices, with competitive accuracy in detecting oriented objects.
Extensive research has been conducted on span-based models for the joint entity and relation extraction task. However, these models sample a large number of negative entities and negative relations during model training, which is essential but leads to a severely imbalanced data distribution and in turn to suboptimal model performance. To address this problem, we propose a two-stage paradigm for span-based joint entity and relation extraction, which involves classifying entities and relations in the first stage and predicting the types of these entities and relations in the second stage. The two-stage paradigm enables our model to significantly narrow the data-distribution gap, including the gap between negative entities and other entities as well as the gap between negative relations and other relations. In addition, we make the first attempt to combine entity type and entity distance as global features, which proves effective, especially for relation extraction. Experimental results on several datasets demonstrate that the span-based joint extraction model enhanced with the two-stage paradigm and global features consistently outperforms previous state-of-the-art span-based models on the joint extraction task and establishes new standard benchmarks. Qualitative and quantitative analyses further validate the effectiveness of the proposed paradigm and global features.
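A bare-bones sketch of the two-stage paradigm is given below: stage one filters spans and span pairs against the negative class, and stage two assigns types only to what survives. Span encoding and the global features (entity type and entity distance, which would be concatenated before the second stage) are omitted, and all module names are illustrative.

```python
import torch
import torch.nn as nn

class TwoStageSpanExtractor(nn.Module):
    def __init__(self, d=768, n_ent_types=5, n_rel_types=6):
        super().__init__()
        self.ent_filter = nn.Linear(d, 2)            # stage 1: entity vs. negative span
        self.rel_filter = nn.Linear(2 * d, 2)        # stage 1: relation vs. negative pair
        self.ent_typer = nn.Linear(d, n_ent_types)   # stage 2: entity type
        self.rel_typer = nn.Linear(2 * d, n_rel_types)

    def forward(self, span_reprs):                   # span_reprs: [S, d]
        keep = self.ent_filter(span_reprs).argmax(-1).bool()
        ents = span_reprs[keep]                      # spans surviving stage 1
        ent_types = self.ent_typer(ents).argmax(-1)
        i, j = torch.meshgrid(torch.arange(len(ents)),
                              torch.arange(len(ents)), indexing="ij")
        pairs = torch.cat([ents[i.flatten()], ents[j.flatten()]], dim=-1)
        rel_keep = self.rel_filter(pairs).argmax(-1).bool()
        rel_types = self.rel_typer(pairs[rel_keep]).argmax(-1)
        return ent_types, rel_types

# toy usage: 12 candidate spans with 768-dim representations
ent_types, rel_types = TwoStageSpanExtractor()(torch.randn(12, 768))
```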